adversarial point


Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints

Pintor, Maura

Neural Information Processing Systems

Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified. The inherent complexity of the underlying optimization requires current gradient-based attacks to be carefully tuned, initialized, and possibly executed for many computationally-demanding iterations, even if specialized to a given perturbation model.
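As a rough, non-authoritative illustration of the idea described above, the sketch below runs a gradient-based minimum-norm attack with an adaptive L2-norm constraint: a normalized descent step on the classification margin, followed by projection onto a per-sample norm ball whose radius shrinks whenever the sample is already misclassified and grows otherwise. The model, step size alpha, and adaptation rate gamma are illustrative assumptions, not the authors' FMN implementation.

    import torch

    def min_norm_l2_attack(model, x, y, steps=100, alpha=1.0, gamma=0.05):
        # delta is the perturbation we optimize; eps is the per-sample L2 budget
        delta = torch.zeros_like(x, requires_grad=True)
        eps = x.flatten(1).norm(dim=1).detach().clone()

        for _ in range(steps):
            logits = model(x + delta)
            onehot = torch.nn.functional.one_hot(y, logits.size(1)).bool()
            # margin > 0 means still correctly classified; we push it below zero
            margin = logits[onehot] - logits.masked_fill(onehot, float("-inf")).amax(dim=1)
            margin.sum().backward()

            with torch.no_grad():
                # adapt the norm budget: shrink where the attack already succeeds
                eps = torch.where(margin < 0, eps * (1 - gamma), eps * (1 + gamma))
                # normalized gradient descent step on the margin
                g = delta.grad
                g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *([1] * (x.dim() - 1)))
                delta -= alpha * g / g_norm
                # project the perturbation back onto the eps-ball
                d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
                scale = (eps / d_norm).clamp(max=1.0).view(-1, *([1] * (x.dim() - 1)))
                delta *= scale
                delta.grad.zero_()

        return (x + delta).detach()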






Robust Federated Personalised Mean Estimation for the Gaussian Mixture Model

Managoli, Malhar A., Prabhakaran, Vinod M., Diggavi, Suhas

arXiv.org Artificial Intelligence

Federated learning with heterogeneous data and personalization has received significant recent attention. Separately, robustness to corrupted data in the context of federated learning has also been studied. In this paper we explore combining personalization for heterogeneous data with robustness, where a constant fraction of the clients are corrupted. Motivated by this broad problem, we formulate a simple instantiation which captures some of its difficulty. We focus on the specific problem of personalized mean estimation where the data is drawn from a Gaussian mixture model. We give an algorithm whose error depends almost linearly on the ratio of corrupted to uncorrupted samples, and show a lower bound with the same behavior, albeit with a gap of a constant factor. Federated learning (FL) is a distributed system approach to collaboratively build machine learning models from multiple clients, without directly sharing limited local data [1], [2].
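As a toy illustration of the setting (not the paper's algorithm), the sketch below estimates a personalized mean by interpolating a client's own sample mean with a trimmed mean over the means reported by other clients, so that a bounded fraction of corrupted reports has limited influence. The trimming fraction and interpolation weight are assumed hyperparameters.

    import numpy as np

    def trimmed_mean(values, trim_frac=0.2):
        # drop the smallest and largest trim_frac fraction of reports, average the rest
        values = np.sort(np.asarray(values, dtype=float))
        k = int(len(values) * trim_frac)
        return values[k:len(values) - k].mean()

    def personalized_estimate(local_samples, reported_means, trim_frac=0.2, weight=0.5):
        # interpolate between the client's own mean and a robust population-level estimate
        local = np.asarray(local_samples, dtype=float).mean()
        robust_global = trimmed_mean(reported_means, trim_frac)
        return weight * local + (1.0 - weight) * robust_global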


Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving

Yang, Bo, Ji, Xiaoyu, Jin, Zizhi, Cheng, Yushi, Xu, Wenyuan

arXiv.org Artificial Intelligence

Our study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection. We introduce an attack technique that, by simply adding a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model. Experimental results reveal that even without changes to the image data channel, the fusion model can be deceived solely by manipulating the LiDAR data channel. This finding raises safety concerns in the field of autonomous driving. Further, we explore how the quantity of adversarial points, the distance between the front-near car and the LiDAR-equipped car, and various angular factors affect the attack success rate. We believe our research can contribute to the understanding of multi-sensor robustness, offering insights and guidance to enhance the safety of autonomous driving.
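The snippet below is only a schematic illustration of the attack surface: it injects a small number of spoofed points into a LiDAR point cloud inside a physically constrained region just above a target vehicle's bounding box. The region bounds and point count are assumptions; the actual attack optimizes the point placement against a specific fusion detector, which is not shown here.

    import numpy as np

    def inject_points_above_box(cloud, box_center, box_size, n_points=20,
                                height_range=(0.2, 0.8), seed=None):
        # cloud: (N, 3) array of x, y, z coordinates; returns cloud with n_points added
        rng = np.random.default_rng(seed)
        cx, cy, cz = box_center
        lx, ly, lz = box_size
        xs = rng.uniform(cx - lx / 2, cx + lx / 2, n_points)
        ys = rng.uniform(cy - ly / 2, cy + ly / 2, n_points)
        zs = cz + lz / 2 + rng.uniform(height_range[0], height_range[1], n_points)  # just above the roof
        return np.vstack([cloud, np.stack([xs, ys, zs], axis=1)])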


Model-Free Prediction of Adversarial Drop Points in 3D Point Clouds

Naderi, Hanieh, Dinesh, Chinthaka, Bajic, Ivan V., Kasaei, Shohreh

arXiv.org Artificial Intelligence

Adversarial attacks pose serious challenges for deep neural network (DNN)-based analysis of various input signals. In the case of 3D point clouds, methods have been developed to identify points that play a key role in the network decision, and these become crucial in generating existing adversarial attacks. For example, a saliency map approach is a popular method for identifying adversarial drop points, whose removal would significantly impact the network decision. Generally, methods for identifying adversarial points rely on the deep model itself in order to determine which points are critically important for the model's decision. This paper aims to provide a novel viewpoint on this problem, in which adversarial points can be predicted independently of the model. To this end, we define 14 point cloud features and use multiple linear regression to examine whether these features can be used for model-free adversarial point prediction, and which combination of features is best suited for this purpose. Experiments show that a suitable combination of features is able to predict adversarial points of three different networks -- PointNet, PointNet++, and DGCNN -- significantly better than a random guess. The results also provide further insight into DNNs for point cloud analysis, by showing which features play key roles in their decision-making process.
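A minimal sketch of the model-free prediction step might look like the following: fit a multiple linear regression from handcrafted per-point features to a ground-truth drop score, then rank points by the predicted score. The feature matrix and the source of the scores are placeholders; the paper defines 14 specific point-cloud features and derives scores from saliency-based attacks on the target networks.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fit_drop_point_predictor(features, drop_scores):
        # features: (n_points, n_features) per-point descriptors; drop_scores: (n_points,)
        return LinearRegression().fit(features, drop_scores)

    def predict_drop_points(reg, features, k=50):
        # rank points by predicted score and return the k most critical indices
        scores = reg.predict(features)
        return np.argsort(scores)[::-1][:k]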


Alleviating Adversarial Attacks on Variational Autoencoders with MCMC

Kuzina, Anna, Welling, Max, Tomczak, Jakub M.

arXiv.org Artificial Intelligence

Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations. Moreover, they could be further used in downstream tasks such as classification. As previous work has shown, one can easily fool VAEs to produce unexpected latent representations and reconstructions for a visually slightly modified input. Here, we examine several objective functions for adversarial attack construction proposed previously and present a solution to alleviate the effect of these attacks. Our method utilizes the Markov Chain Monte Carlo (MCMC) technique in the inference step that we motivate with a theoretical analysis. Thus, we do not incorporate any extra costs during training, and the performance on non-attacked inputs is not decreased. We validate our approach on a variety of datasets (MNIST, Fashion MNIST, Color MNIST, CelebA) and VAE configurations ($\beta$-VAE, NVAE, $\beta$-TCVAE), and show that our approach consistently improves the model robustness to adversarial attacks.
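As an assumed, simplified illustration of MCMC-corrected inference, the snippet below starts from the encoder's latent code and applies a few unadjusted Langevin steps on the unnormalized log-posterior log p(x|z) + log p(z) under a standard normal prior. The sampler, step size, and the hypothetical decoder_log_prob interface are assumptions and may differ from the paper's exact procedure.

    import torch

    def langevin_refine(z0, x, decoder_log_prob, steps=10, step_size=1e-2):
        # z0: latent from the encoder; decoder_log_prob(x, z) -> log p(x | z) per sample (hypothetical interface)
        z = z0.clone().detach().requires_grad_(True)
        for _ in range(steps):
            # unnormalized log-posterior under a standard normal prior on z
            log_post = decoder_log_prob(x, z) - 0.5 * z.pow(2).sum(dim=-1)
            grad = torch.autograd.grad(log_post.sum(), z)[0]
            with torch.no_grad():
                z += 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        return z.detach()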


Leveraging Reinforcement Learning for evaluating Robustness of KNN Search Algorithms

Vadiraja, Pramod, Balada, Christoph Peter

arXiv.org Artificial Intelligence

The problem of finding the K nearest neighbors of a given query point in a dataset has been studied for many years. In very high-dimensional spaces, K-nearest neighbor search (KNNS) suffers from the computational cost of high-dimensional distance calculations, and the curse of dimensionality makes it difficult to rely on the results of the many approximate nearest neighbor search approaches. In this paper, we survey several novel K-nearest neighbor search approaches that tackle the problem from the perspectives of computational cost, accuracy of the approximated results, and the use of parallelism to speed up computation. We attempt to derive a relationship between the true positive and false positive points returned by a given KNNS approach. Finally, in order to evaluate the robustness of a KNNS approach against adversarial points, we propose a generic reinforcement-learning-based framework.
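A toy version of such an evaluation loop (not the paper's framework) could treat the dataset as an environment state: an agent proposes a candidate adversarial point, the point is inserted, and the reward is the drop in recall of the approximate search relative to an exact brute-force search. The approx_knn callable below is a placeholder for whichever approximate index is under evaluation.

    import numpy as np

    def exact_knn(data, query, k):
        # brute-force reference: indices of the k closest points to the query
        dists = np.linalg.norm(data - query, axis=1)
        return np.argsort(dists)[:k]

    def recall(approx_ids, exact_ids):
        return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

    def environment_step(data, query, candidate_point, approx_knn, k=10):
        # insert one candidate adversarial point; reward is the resulting drop in recall
        before = recall(approx_knn(data, query, k), exact_knn(data, query, k))
        new_data = np.vstack([data, np.asarray(candidate_point)[None, :]])
        after = recall(approx_knn(new_data, query, k), exact_knn(new_data, query, k))
        return new_data, before - after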